
Glance and Focus: Memory Prompting for Multi-Event Video Question Answering

Neural Information Processing Systems

Video Question Answering (VideoQA) has emerged as a vital tool for evaluating agents' ability to understand daily human behaviors. Despite the recent success of large vision-language models on many multi-modal tasks, complex situation reasoning over videos involving multiple human-object interaction events remains challenging. In contrast, humans can easily tackle it by using a series of episodic memories as anchors to quickly locate question-related key moments for reasoning. To mimic this effective reasoning strategy, we propose the Glance-Focus model. One simple way to obtain such memories is to apply an action detection model to predict a set of actions as key memories.
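To make the "memories as anchors" idea concrete, here is a generic illustration (not the Glance-Focus model itself, whose details are not given in this abstract): each event memory carries an embedding tied to a video segment, the question is embedded in the same space, and the highest-scoring memory points to the segment to focus on. All names, vectors, and the cosine-similarity scoring below are hypothetical stand-ins.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def focus_segment(question_vec, memories):
    """memories: list of (segment_id, embedding) pairs.
    Return the segment whose memory best matches the question."""
    return max(memories, key=lambda m: cosine(question_vec, m[1]))[0]

# Toy usage with 2-D embeddings (purely illustrative).
mems = [("pick_up_cup", [1.0, 0.0]), ("open_door", [0.0, 1.0])]
best = focus_segment([0.9, 0.1], mems)  # → "pick_up_cup"
```

The point of the sketch is only the retrieval pattern: a small, discrete set of event anchors lets the model skip scanning the full video and reason over one question-relevant moment.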


Glance and Focus: a Dynamic Approach to Reducing Spatial Redundancy in Image Classification

Neural Information Processing Systems

The accuracy of deep convolutional neural networks (CNNs) generally improves when they are fueled with high-resolution images. However, this often comes at a high computational cost and a large memory footprint. Inspired by the fact that not all regions in an image are task-relevant, we propose a novel framework that performs efficient image classification by processing a sequence of relatively small inputs, which are strategically selected from the original image with reinforcement learning. Such a dynamic decision process naturally facilitates adaptive inference at test time, i.e., it can be terminated once the model is sufficiently confident about its prediction, thus avoiding further redundant computation. Notably, our framework is general and flexible, as it is compatible with most state-of-the-art lightweight CNNs (such as MobileNets, EfficientNets and RegNets), which can be conveniently deployed as the backbone feature extractor. Experiments on ImageNet show that our method consistently improves the computational efficiency of a wide variety of deep models. For example, it further reduces the average latency of the highly efficient MobileNet-V3 on an iPhone XS Max by 20% without sacrificing accuracy.
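The adaptive-inference loop described in the abstract can be sketched in a few lines: run a classifier on successive small crops, fuse the evidence, and stop as soon as the prediction is confident enough. This is a minimal illustration of confidence-gated early exit, not the authors' implementation; the `predict` callable, the additive logit fusion, and the patch list are hypothetical stand-ins.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def adaptive_classify(patches, predict, threshold=0.9):
    """Run `predict` on successive patches; exit early when the fused
    prediction's top-class probability reaches `threshold`.
    Returns (predicted_label, number_of_patches_used)."""
    fused = None
    for step, patch in enumerate(patches, start=1):
        logits = predict(patch)
        # Accumulate evidence across glimpses (simple additive fusion).
        fused = logits if fused is None else [a + b for a, b in zip(fused, logits)]
        probs = softmax(fused)
        if max(probs) >= threshold:
            break  # confident enough: skip the remaining patches
    return probs.index(max(probs)), step

# Toy usage: a fake 3-class "predictor" that identifies patches with
# their logits; it becomes confident after the second glimpse.
fake_logits = [[1.0, 0.8, 0.9], [4.0, 0.1, 0.2], [5.0, 0.0, 0.0]]
label, steps_used = adaptive_classify(fake_logits, lambda p: p, threshold=0.9)
# label == 0 after only 2 of the 3 patches
```

Easy inputs thus terminate after one or two glimpses while hard inputs consume the full budget, which is what yields the average-latency savings the abstract reports.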


Review for NeurIPS paper: Glance and Focus: a Dynamic Approach to Reducing Spatial Redundancy in Image Classification

Neural Information Processing Systems

Weaknesses: Post-rebuttal edit: The extra experiments comparing against random glimpse selection did convince me that GFNet does something beyond random. But I'm still not convinced that GFNets are particularly smart at glancing. Note (from Table 1 of the rebuttal), for instance, that to reach SOTA accuracy, a fovea/glance of size 1/n of the original window seems to need n steps. To me this suggests that glancing barely pays for itself. In fact, if you replaced random foveation with deterministic uniform coverage of the image, you might have done better.


Review for NeurIPS paper: Glance and Focus: a Dynamic Approach to Reducing Spatial Redundancy in Image Classification

Neural Information Processing Systems

Four knowledgeable referees support acceptance of the contribution; they like the authors' novel idea of the Glance and Focus net and the ImageNet-scale evaluations showing a superior accuracy-efficiency tradeoff against state-of-the-art baselines. Please make sure to properly include and discuss the missing references and experimental comparisons, as promised in the rebuttal.

